
    Vision based interactive toys environment


    Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach

    Over the past few years, there has been noticeable progress in environmental models and information fusion systems that take advantage of recent developments in sensor and mobile technologies. However, little attention has been paid so far to quantifying the relationship between environmental changes and their impact on our bodies in real-life settings. In this paper, we present a data-driven approach based on direct and continuous sensor data to assess the impact of the surrounding environment on physiological changes and emotion. We investigate the potential of fusing on-body physiological signals, environmental sensory data and online self-report emotion measures in order to achieve the following objectives: (1) model the short-term impact of the ambient environment on the human body, and (2) predict emotions based on on-body sensors and environmental data. To achieve this, we conducted a real-world study ‘in the wild’ with on-body and mobile sensors, collecting data from participants walking around Nottingham city centre in order to develop analytical and predictive models. Multiple regression, after allowing for possible confounders, showed a noticeable correlation between noise exposure and heart rate. Similarly, UV and environmental noise were shown to have a noticeable effect on changes in electrodermal activity (EDA). Air pressure made the greatest contribution to the detected changes in body temperature and motion, and a significant correlation was also found between air pressure and heart rate. Finally, decision fusion of the classification results from the different modalities is performed. To the best of our knowledge, this work presents the first attempt at fusing and modelling environmental and physiological data collected from sensors in a real-world setting.
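    The abstract does not include the analysis code, but the two analytical steps it names (confounder-adjusted multiple regression, then decision fusion across modalities) can be sketched briefly. The following is a minimal illustration, not the authors' pipeline; the column names, synthetic data and majority-vote fusion rule are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' code): multiple regression of heart rate on
# environmental noise while adjusting for possible confounders, plus
# majority-vote decision fusion across per-modality classifiers.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical sensor dataframe: one row per time window.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "noise_db":     rng.uniform(40, 90, 200),     # environmental noise
    "air_pressure": rng.uniform(990, 1030, 200),  # hPa
    "uv_index":     rng.uniform(0, 8, 200),
    "heart_rate":   rng.uniform(60, 110, 200),    # bpm
})

# Regress heart rate on noise, with air pressure and UV included as
# covariates to allow for possible confounding.
X = df[["noise_db", "air_pressure", "uv_index"]]
model = LinearRegression().fit(X, df["heart_rate"])
print(dict(zip(X.columns, model.coef_)))  # per-predictor effect estimates

def decision_fusion(*per_modality_predictions):
    """Majority vote over integer class labels predicted per modality."""
    stacked = np.vstack(per_modality_predictions)  # (modalities, samples)
    # Most frequent label per sample across modalities.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)

# e.g. fused = decision_fusion(preds_body, preds_env, preds_location)
```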

    Harnessing digital phenotyping to deliver real-time interventional bio-feedback

    With the decreasing cost and increasing capability of sensor and mobile technology, along with the proliferation of data from social media, the ambient environment and other sources, new concepts for digital prognostics and the technological quantification of wellbeing are emerging. These concepts are referred to as Digital Phenotyping. One of the main challenges facing the development of these technologies is how to design easy-to-use, personalised devices that benefit from interventional feedback by leveraging on-device processing in real time. Tangible interfaces designed for wellbeing have the capability to reduce anxiety or help manage panic attacks, improving the quality of life of the general population and of vulnerable members of society. Real-time biofeedback presents new opportunities in Artificial Intelligence (AI): mental wellbeing can be inferred automatically, interventional feedback can be applied automatically, and that feedback can be individually personalised. This research explores future directions for biofeedback, including the opportunity to fuse multiple AI-enabled feedback mechanisms that can then be utilised collectively or individually.
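    To make the on-device, real-time interventional loop concrete, here is a minimal sketch. It is not the system described in the abstract: the threshold rule, window length and the read_sensor / trigger_feedback callables are hypothetical placeholders standing in for a wearable's sensor stream and feedback mechanism.

```python
# A minimal sketch, assuming a wearable exposing a scalar arousal signal
# (e.g. EDA): an on-device rule that compares the latest sample against a
# rolling personal baseline and triggers interventional feedback in real time.
import time
from collections import deque

WINDOW = 30        # number of recent samples kept on-device
THRESHOLD = 1.5    # z-score above personal baseline that triggers feedback

def biofeedback_loop(read_sensor, trigger_feedback):
    """Poll the sensor, maintain a rolling baseline, and fire feedback
    (e.g. a paced-breathing prompt) when arousal spikes."""
    history = deque(maxlen=WINDOW)
    while True:
        sample = read_sensor()                 # hypothetical sensor read
        history.append(sample)
        if len(history) == WINDOW:
            mean = sum(history) / WINDOW
            var = sum((x - mean) ** 2 for x in history) / WINDOW
            std = var ** 0.5 or 1.0            # guard against zero variance
            if (sample - mean) / std > THRESHOLD:
                trigger_feedback()             # hypothetical intervention
        time.sleep(1.0)                        # 1 Hz sampling, illustrative
```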

    Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection

    The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), video (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been utilised to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they carry some critical limitations which may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach to emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges the local interactions of three sensor modalities (on-body, environmental and location) into a global model that represents signal dynamics along with the temporal relationships within each modality. Our approach employs a series of learning algorithms, including a hybrid Convolutional Neural Network and Long Short-Term Memory Recurrent Neural Network (CNN-LSTM) applied to the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that deep learning approaches are effective for human emotion classification when a large number of sensor inputs is utilised (average accuracy 95% and F-measure 95%), and that the hybrid models outperform traditional fully connected deep neural networks (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that utilise feature engineering to train the model (average accuracy 83% and F-measure 82%).
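    The abstract names the hybrid CNN-LSTM architecture but not its internals, so the following is a minimal sketch of that general technique rather than the paper's exact model; the channel count, layer sizes, kernel widths and number of emotion classes are all assumptions for illustration.

```python
# Minimal sketch (architecture details assumed, not the paper's exact model):
# a hybrid CNN-LSTM that takes windows of raw multimodal sensor signals
# (channels = on-body + environmental + location) and outputs emotion-class
# logits, with no manual feature engineering.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=12, n_classes=5, hidden=64):
        super().__init__()
        # 1-D convolutions learn local patterns over the raw signals.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # The LSTM models temporal relationships across the conv features.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        z = self.conv(x)             # (batch, 64, time/2)
        z = z.transpose(1, 2)        # (batch, time/2, 64) for the LSTM
        _, (h, _) = self.lstm(z)     # final hidden state summarises the window
        return self.head(h[-1])      # emotion-class logits

# e.g. logits = CNNLSTM()(torch.randn(8, 12, 128))  # 8 windows, 12 sensors, 128 steps
```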